- Elon Musk said AI will likely be smarter than all humans combined by 2029.
- Meta's chief AI scientist snapped back, taking a dig at Tesla's self-driving system.
- Yann LeCun said AI still can't teach itself to drive a car, pointing to a crucial conceptual gap in the technology.
A leading figure rejected Elon Musk's claims that AI could soon outsmart all of humanity.
Yann LeCun, Meta's AI lead, pushed back after Musk said AI "will probably be smarter than any single human next year."
"By 2029, AI is probably smarter than all humans combined," the billionaire and adamant AI fan supporter of AI, said on X last week.
LeCun, the chief AI scientist at Meta who is often called one of the "godfathers of AI," shot down Musk's argument, taking a dig at Tesla's long-promised self-driving cars.
"No," he said on X. "If it were the case, we would have AI systems that could teach themselves to drive a car in 20 hours of practice, like any 17 year-old.
"But we still don't have fully autonomous, reliable self-driving, even though we (you) have millions of hours of labeled training data," LeCun added.
Cats are cleverer
It's not the first time LeCun has spoken out about claims of intelligent AI being around the corner.
"We are really far from human-level intelligence. There were stories about the fact that you could use an LLM (large language model) to give you instructions of how to make chemical weapon or bioweapon. That turns out to be false," he told the World Government Summit in Dubai last month in comments reported by The Observer.
The French-American scientist said AI models have about as much computing power as a common housecat's brain but are not nearly as clever.
"Why aren't those systems as smart as a cat?" LeCun asked. "A cat can remember, can understand the physical world, can plan complex actions, can do some level of reasoning — actually much better than the biggest LLMs," he said.
For him, it's not just a matter of scaling up the existing technology; a key ingredient is still missing that holds machine learning and large language models back.
"That tells you we are missing something conceptually big to get machines to be as intelligent as animals and humans," per LeCun.
"Some time in the future those systems might be actually smart enough to give you useful information better than you can get with a search engine. But it's just not true today."
Musk, on the other hand, has been vocal about his belief that AI could become dangerous, warning that it could develop superintelligence capable of overpowering humanity.
"It's a small likelihood of annihilating humanity, but it's not zero," the billionaire previously told The Wall Street Journal.
"There is a risk that advanced AI either eliminates or constrains humanity's growth," he previously said.
A longtime supporter of machine learning, Musk has bet on putting AI at the center of Tesla's self-driving development.
Musk said in 2019 that self-driving Teslas could be dispatched as "robotaxis" by 2020. That deadline has since been repeatedly pushed back, Business Insider previously reported.
Musk is also notably a cofounder and former board member of OpenAI, a company LeCun has openly criticized in the past.